From The Low
Beyond.
©1996-2001 by Eliezer S. Yudkowsky.
All rights reserved.
The address of this document is http://sysopmind.com/singularity.html.
If you found it elsewhere, please visit the foregoing link for the most recent
version.
Created: 11/18/1996
Updated: 05/27/2001
If computing speeds double every two years,
what happens when computer-based AIs are doing the research?
Computing speed doubles every two years.
Computing speed doubles every two years of work.
Computing speed doubles every two subjective years of work.
Two years after Artificial Intelligences reach human equivalence, their
speed doubles. One year later, their speed doubles again.
Six months - three months - 1.5 months ... Singularity.
Plug in the numbers for current computing speeds, the current doubling time,
and an estimate for the raw processing power of the human brain, and the
numbers match in: 2021.
But personally, I'd like to do it sooner.
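For the curious, here is a minimal sketch of that back-of-the-envelope calculation. The inputs - roughly 10^12 ops/sec for a mid-90s supercomputer, an eighteen-month doubling time, and the 10^17 ops/sec brain estimate used later on this page - are illustrative assumptions, not measurements; change any of them and the date moves.

#include <math.h>
#include <stdio.h>

int main(void) {
    double current_ops   = 1e12;  /* assumed: ~1 teraflops, circa 1996 */
    double brain_ops     = 1e17;  /* assumed: raw power of a human brain */
    double doubling_time = 1.5;   /* assumed: years per doubling */
    double doublings = log2(brain_ops / current_ops);  /* about 16.6 doublings */
    printf("Human-equivalent hardware around %.0f\n",
           1996.0 + doublings * doubling_time);         /* about 2021 */
    return 0;
}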
It began three and a half billion years ago in a pool of muck, when a
molecule made a copy of itself and so became the ultimate ancestor of all
earthly life.
It began four million years ago, when brain volumes began climbing rapidly
in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.
In less than thirty years, it will end.
At some point in the near future, someone will come up with a method of
increasing the maximum intelligence on the planet - either coding a true Artificial
Intelligence or enhancing
human intelligence. An enhanced human would be better at thinking up
ways of enhancing humans; would have an "increased capacity for
invention". What would this increased ability be directed at?
Creating the next generation of enhanced humans.
And what would those doubly enhanced minds do? Research methods on
triply enhanced humans, or build AI minds operating at computer speeds.
And an AI would be able to reprogram itself, directly, to run faster -
or smarter. And then our crystal ball explodes, "life as we
know it" is over, and everything we know goes out the window.
"Here I had tried a straightforward extrapolation
of technology, and found myself precipitated over an abyss. It's a
problem we face every time we consider the creation of intelligences greater
than our own. When this happens, human history will have reached a kind of
singularity - a place where extrapolation breaks down and new models must be
applied - and the world will pass beyond our understanding."
-- Vernor
Vinge, True Names and Other
Dangers, p. 47.
There are multiple paths to the Singularity. Nanotechnology - the ability to build
computers atom by atom and rewire brains neuron by neuron. Artificial
Intelligence, self-understanding and self-enhancing seed AI. We could bootstrap our
way to the Singularity via the relatively mild enhanced humans produced by neurohacking.
Direct neuron-to-silicon interfaces could improve human intelligence or
computer intelligence or both. Or some completely unanticipated
breakthrough could occur.
A civilization with high technology is unstable; it ends when the species
destroys itself or improves on itself. If the current trends continue -
if we don't run up against some unexpected theoretical cap on intelligence, or
turn the Earth into a radioactive wasteland, or bury the planet under a tidal
wave of voracious self-reproducing nanodevices - the Singularity is
inevitable. The most-quoted estimate for the Singularity is 2035 - within
your lifetime! - although many, myself included, think that the Singularity may occur
substantially sooner.
Some terminology, due to Vernor Vinge's Hugo-winning A Fire Upon The Deep:
Power - An entity from beyond the Singularity.
Transcend, Transcended, Transcendence - The act of reprogramming oneself
to be smarter, reprogramming (with one's new intelligence) to be smarter still,
and so on ad Singularitum. The "Transcend" is the
metaphorical area where the Powers live.
Beyond - The grey area between being human and being a Power; the domain
inhabited by entities smarter than human, but not possessing the technology to
reprogram themselves directly and Transcend.
"I imagine bugs and girls have a dim perception that Nature played a
cruel trick on them, but they lack the intelligence to really comprehend its
magnitude."
-- Calvin and Hobbes
But why should the Powers be so much more than we are now? Why
not assume that we'll get a little smarter, and that's it?
Consider the sequence 1, 2, 4, 8, 16, 32. Consider the iteration of
F(x) = (x + x). Every couple of years, computer performance
doubles. (1) That is the
demonstrated rate of improvement as overseen by constant, unenhanced minds -
progress according to mortals.
Right now the amount of networked silicon computing power on the planet is
slightly above the power of a human brain. The power of a human brain is
10^17 ops/sec, or one hundred million billion operations per second (2), versus a billion or so computers on
the Internet with somewhere between 100 million ops/sec and 1 billion ops/sec
apiece. The total amount of computing power on the planet is the
amount of power in a human brain, 10^17 ops/sec, multiplied by the number of
humans, presently six billion or 6x10^9. The amount of artificial
computing power is so small as to be irrelevant, not because there are so many
humans, but because of the sheer raw power of a single human brain.
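(Spelling out the arithmetic with the figures just given: 10^17 ops/sec per brain times 6x10^9 brains is roughly 6x10^26 ops/sec of biological computing power, versus about 10^9 computers times 10^8 or 10^9 ops/sec apiece - on the order of 10^17 to 10^18 ops/sec of silicon, around a billionth of the biological total.)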
At the old rate of progress, when the original Singularity calculations were
performed in 1988 (3), computers
were expected to reach human-equivalent levels - 10^17 floating-point
operations per second, or one hundred petaflops - at around 2035. But at
that rate of progress, one-teraflops machines were expected in 2000; as it
turned out, one-teraflops machines were around in 1996, when this document was
first written. In 1998 the top speed was 3.2 teraflops, and in 1999 IBM
announced the Blue
Gene project to build a petaflops machine by 2005. So the old
estimates may be a little conservative.
Once we have human-equivalent computers, the amount of computing power on
the planet is equal to the number of humans plus the number of
computers. The amount of intelligence available takes a huge jump.
Ten years later, humans become a vanishing quantity in the equation.
That doubling sequence is actually a pessimistic projection, because
it assumes that computing power continues to double at the same rate. But
why? Computer speeds don't double due to some inexorable physical law,
but because researchers and engineers find ways to make faster chips. If
some of the researchers and engineers are themselves computers...
A group of human-equivalent computers spends 2 years to double computer
speeds. Then they spend another 2 subjective years, or 1 year in
human terms, to double it again. Then they spend another 2 subjective
years, or six months, to double it again. After four years total, the
computing power goes to infinity.
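(Spelled out: 2 + 1 + 1/2 + 1/4 + 1/8 + ... years. The series contains infinitely many doublings, but it sums to exactly 4 years.)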
That is the "Transcended" version of the doubling sequence.
Let's call the "Transcend" of a sequence {a_0, a_1, a_2...}
the function where the interval between a_n and a_(n+1)
is inversely proportional to a_n. (4). So a Transcended doubling
function starts with 1, in which case it takes 1 time-unit to go to 2.
Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to
go to 8. This function, if it were continuous, would be the hyperbolic
function y = 2/(2 - x). When x = 2, then (2 - x) = 0 and y = infinity.
The behavior at that point is known mathematically as a singularity.
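(Checking the algebra: after n doublings the value is 2^n, and the elapsed time is 1 + 1/2 + 1/4 + ... + 1/2^(n-1), which equals 2 - 2/2^n. Writing y for the value and x for the elapsed time, x = 2 - 2/y, which rearranges to y = 2/(2 - x) - and y diverges as x approaches 2.)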
And the Transcended doubling sequence is also a pessimistic projection,
not a Singularity at all, because it assumes that only speed is
enhanced. What if the quality of thought were enhanced?
Right now, two years of work - well, these days, eighteen months of work.
Eighteen subjective months of work suffices to double computing speeds.
Shouldn't this improve a bit with thought-sharing and eidetic memories?
Shouldn't this improve if, say, the total sum of human scientific knowledge is
stored in predigested, cognitive, ready-to-think format? Shouldn't this
improve with short-term memories capable of holding the whole of human
knowledge? A human-equivalent AI isn't "equivalent" - if
Kasparov had had even the smallest, meanest automatic chess-playing program
integrated solidly with his intuitions, he would have beaten Deep Blue into a
pulp. That's The AI
Advantage: Simple tasks carried out at blinding speeds and without
error, conscious tasks carried out with perfect memory and total self-awareness.
I haven't even started on the subject of AIs redesigning their
cognitive architectures, although they'll have a far easier time of it than we
would - especially if they can make backups. Transcended doubling might
run up against the laws of physics before reaching infinity... but even the
laws of physics as now understood would allow one gram (more or less) to
store and run the entire human race at a million subjective years per
second. (5).
Let's take a deep breath and think about that for a moment. One
gram. The entire human race. One million years per
second. That means, using only this planetary mass for computing power,
it would be possible to support more people than the entire Universe could
support if biological humans colonized every single planet. It
means that, in a single day, a civilization could live over 80 billion
years, several times older than the age of the Universe to date.
The peculiar thing is that most people who talk about "the laws of
physics" setting hard limits on Powers would never even dream of setting
the same limits on a (merely) galaxy-spanning civilization of (normal) humans a
(brief) billion years old. Part of that is simply a cultural convention
of science fiction; interstellar civilizations can break any physical law they
please, because the readers are used to it. But part of that is because
scientists and science-fiction authors have been taught, so many times, that
Ultimate Unbreakable Limits usually fall to human ingenuity and a few
generations of time. Nobody dares say what might be possible a billion
years from now because that is a simply unimaginable amount of time.
We know that change crept at a snail's pace a mere millennium ago, and that even
a hundred years ago it would have been impossible to place correct
limits on the ultimate power of technology. We know that the past could
never have placed limits on the present, and so we don't try to place limits on
the future. But with transhumans, the analogy is not to Lord Kelvin, nor
Aristotle, nor to a hunter-gatherer - all of whom had human intelligence - but
to a Neanderthal. With Powers, to a fish. And yet, because the
power of higher intelligence is not as publicly recognized as the power of a
few million years - because we have no history of naysayers being
embarrassed by transhumans instead of mere time - some of us still sit,
grunting around the fire, setting ultimate limits on the sharpness of spears;
some of us still swim about, unblinking, unable to engage in abstract thought,
but knowing that the entire Universe is, must be, wet.
To convey the rate of progress driven by smarter researchers, I
needed to invent a function more complex than the doubling function used
above. We'll call this new function T(n). You can think of T(n) as
representing the largest number conceivable to someone with an n-neuron
brain. More formally, T(n)
is defined as the longest block of 1s produced by any halting n-state
Turing
Machine acting on an initially blank tape. If you're familiar with
computers but not Turing Machines, consider T(n) to be the largest number
that can be produced by a computer program with n instructions.
Or, if you're an information theorist, think of T(n) as the inverse
function of complexity; it produces the largest number with complexity n
or less.
The sequence produced by iterating T(n), S{n} = T(S{n - 1}),
is constant for very low values of n. S{0} is defined to be 0; a program of
length zero produces no output. This corresponds to a Universe empty of
intelligence. T(1)
= 1. This corresponds to an intelligence not capable of
enhancing itself; this corresponds to where we are now. T(2) = 3.
Here begins the leap into the Abyss. Once this function increases at all,
it immediately tapdances off the brink of the knowable. T(3) = 6? T(6) = 64?
T(64) = vastly more than 10^80, the number of atoms in the Universe. T(10^80)
is something that only a Transcendent entity will ever be able to calculate,
and that only if Transcendent entities can create new Universes, maybe even new
laws of physics, to supply the necessary computing power. Even T(64) will
probably never be known to any strictly human being.
Now take the Transcended version of S{n}, starting at 2. Half a
time-unit later, we have 3. A third of a time-unit after that, 6. A
sixth later - one whole unit after this function started - we have 64. A
sixty-fourth later, 10^80. An unimaginably tiny fraction of a second
later... Singularity.
Is S{n}
really a good model of the Singularity? Of course not. "Good
model of the Singularity" is an oxymoron; that's the whole point;
the Singularity will outrun any model a human could have formulated a hundred
years ago, and the Singularity will outrun any model we formulate today.
(6)
The main objection, though, would be that S{n} is an ungrounded
metaphor. The Transcended doubling sequence models faster
researchers. It's easy to say that S{n} models smarter researchers,
but what does smarter actually mean in this context?
Smartness is the measure of what you see as obvious, what you can see
as obvious in retrospect, what you can invent, and what you can comprehend.
To be more precise about it, smartness is the measure of your semantic
primitives (what is simple in retrospect), the way in which you manipulate the
semantic primitives (what is obvious), the structures your semantic primitives
can form (what you can comprehend), and the way you can manipulate those
structures (what you can invent). If you speak complexity theory, the
difference between obvious and obvious in retrospect, or inventable
and comprehensible, is like the difference between NP and P.
All humans who have not suffered neural injuries have the same semantic
primitives. What is obvious in retrospect to one is obvious in
retrospect to all. (Four notes: First, by "neural
injuries" I do not mean anything derogatory - it's just that a person
missing the visual cortex will not have visual semantic primitives. If
certain neural pathways are severed, people not only lose their ability to see
colors; they lose their ability to remember or imagine
colors. Second, theorems in math may be obvious in retrospect only to
mathematicians - but anyone else who acquired the skill would have the ability
to see it. Third, to some extent what we speak of as obvious
involves not just the symbolic primitives but very short links between
them. I am counting the primitive link types as being included under
"semantic primitives". When we look at a thought-sequence and
see it as being obvious in retrospect, it is not necessarily a single
semantic primitive, but is composed of a very short chain of semantic
primitives and link types. Fourth, I apologize for my tendency to dissect
my own metaphors; I really can't help it.)
Similarly, the human cognitive architecture is universal. We all have
the same sorts of underlying mindstuff. Though the nature of this
mindstuff is not necessarily known, our ability to communicate with each other
indicates that, whatever we are communicating, it is the same on both sides.
If any two humans share a set of concepts, any structure composed of those
concepts that is understood by one will be understood by the other.
Different humans may have different degrees of the ability to manipulate
and structure concepts; different humans may see and invent different
things. The great breakthroughs of physics and engineering did not occur
because a group of people plodded and plodded and plodded for generations until
they found an explanation so complex, a string of ideas so long, that only time
could invent it. Relativity and quantum physics and buckyballs and
object-oriented programming all happened because someone put together a short,
simple, elegant semantic structure in a way that nobody had ever thought of
before. Being a little bit smarter is where revolutions come
from. Not time. Not hard work. Although hard work and time
were usually necessary, others had worked far harder and longer without
result. The essence of revolution is raw smartness.
Now think about the Singularity. Think about a chimpanzee trying to
understand integral calculus. Think about the people with damaged visual
neurology who cannot remember what it was like to see, who cannot imagine the
color red or visualize two-dimensional structures. Think about a visual
cortex with trillions of times as many neuron-equivalents. Think about
twenty thousand distinct colors in the rainbow, none a shade of any
other. Think about rotating fifty-dimensional objects. Think about
attaching semantic primitives to the pixels, so that one could see a rainbow of
ideas in the same way that we see a rainbow of colors.
Our semantic primitives even determine what we can know. Why
does anything exist at all? Nobody knows. And yet the answer is
obvious. The First Cause must be obvious. It has to be
obvious to Nothing, present in the absence of anything else, a substance
formed from -blank-, a conclusion derived without data or initial
assumptions. What is it that evokes conscious experience, the
stuff that minds are made of? We are made of conscious
experiences. There is nothing we experience more directly.
How does it work? We don't have a clue. Two and a half millennia
of trying to solve it and nothing to show for it but "I think therefore I
am." The solutions seem to be necessarily simple, yet are
demonstrably imperceptible. Perhaps the solutions operate outside the
representations that can be formed with the human brain.
If so, then our descendants, successors, future selves will figure out the
semantic primitives necessary and alter themselves to perceive them. The
Powers will dissect the Universe and the Reality until they understand why
anything exists at all, analyze neurons until they understand qualia. And
that will only be the beginning. It won't end there. Why
should there be only two hard problems? After all, if not for humans, the
Universe would apparently contain only one hard problem, for how could a
non-conscious thinker formulate the hard problem of consciousness? Might
there be states of existence beyond mere consciousness - transsentience?
Might solving the nature of reality create the ability to create new Universes,
manipulate the laws of physics, even alter the kind of things that can be
real - "ontotechnology"? That's what the Singularity
is all about.
So before you talk about life as a Power or the Utopia to come - a favorite
pastime of transhumanists and Extropians is to discuss the problems of uploading, life after being uploaded,
and so on - just remember that you probably have a much better chance of
solving both hard problems than you do of making a valid statement about the
future. This goes for me too. I'll stand by everything I said about
humans, including our inability to understand certain things, but everything I
said about the Powers is almost certainly wrong. "They'll figure out
the semantic primitives necessary and alter themselves to perceive
them." Wrong. "Figure out." "Semantic
primitives." "Alter." "Perceive." I
would bet on all of these terms becoming obsolete after the Singularity.
There are better ways and I'm sure They - or It, or [sound of exploding brain]
will "find them".
I would like to introduce a unit of post-Singularity progress, the
Perceptual Transcend or PT.
[Brief pause while audience collapses in helpless laughter.]
A Perceptual Transcend occurs when all things that were comprehensible
become obvious in retrospect, and all things that were inventable
become obvious. A Perceptual Transcend occurs when the semantic structures
of one generation become the semantic primitives of the next. To put it
another way, one PT from now, the whole of human knowledge becomes
perceivable in a single flash of experience, in the same way that we now
perceive an entire picture at once.
Computers are a PT above humans when it comes to arithmetic - sort of.
While we need to manipulate an entire precarious pyramid of digits, rows and
columns in order to multiply 62305 by 10358, a computer can spit out the answer
- 645355190 - in a single obvious step. These computers aren't
actually a PT above us at all, for two reasons. First of all, they just
handle numbers up to two billion instead of 9; after that they need to
manipulate pyramids too. Far more importantly, they don't notice anything
about the numbers they manipulate, as humans do. If you multiply 23704 by
14223, using the wedding-cake method of multiplication, you won't multiply
23704 by 2 twice in a row; you'll just steal the results from last time.
If one of the interim results is 12345 or 99999 or 314159, you'll notice that,
too. The way computers manipulate numbers is actually less
powerful than the way we manipulate numbers.
Would the Powers settle for less? A PT above us, multiplication is
carried out automatically but with full attention to interim results,
numbers that happen to be prime, and the like. If I were designing one of
the first Powers - and, down at the Singularity
Institute, this is what we're doing - I would create an entire subsystem
for manipulating numbers, one that would pick up on primality, complexity, and
all the numeric properties known to humanity. A Power would understand why
62305 times 10358 equals 645355190, with the same understanding that would be
achieved by a top human mathematician who spent hours studying all the numbers
involved. And at the same time, the Power will multiply the two numbers
automatically.
For such a Power, to whom numbers were true semantic primitives, Fermat's
Last Theorem and the Goldbach Conjecture and the Riemann Hypothesis might be obvious.
Somewhere in the back of its mind, the Power would test each statement with a
million trials, subconsciously manipulating all the numbers involved to find why
they were not the sum of two cubes or why they were the sum of two
primes or why their real part was equal to one-half. From there,
the Power could intuit the most basic, simple solution simply by
generalizing. Perhaps human mathematicians, if they could perform the
arithmetic for a thousand trials of the Riemann Hypothesis, examining every
intermediate step, looking for common properties and interesting shortcuts,
could intuit a formal solution. But they can't, and they certainly can't
do it subconsciously, which is why the Riemann Hypothesis remains unobvious and
unproven - it is a conceptual structure instead of a conceptual primitive.
Perhaps an even more thought-provoking example is provided by our visual
cortex. On the surface, the visual cortex seems to be an image processor.
In a modern computer graphics engine, an image is represented by a
two-dimensional array of pixels (7).
To rotate this image - to cite one operation - each pixel's rectangular
coordinates {x, y} are converted to polar coordinates {theta, r}. All thetas,
representing the angle, have a constant added. The polar coordinates are
then converted back to rectangular. There are ways to optimize this
process, and ways to account for intersecting and empty pixels on the new
array, but the essence is clear: To perform an operation on an entire
picture, perform the operation on each pixel in that picture.
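As a concrete illustration of that pixel-by-pixel procedure, here is a rough sketch of image rotation in C. The function name, the flat grayscale buffer, and the nearest-neighbor sampling (working backwards from each destination pixel, which is one way of handling the intersecting and empty pixels mentioned above) are my assumptions, not any particular graphics engine's API.

#include <math.h>

/* Rotate a width x height grayscale image by 'angle' radians about its
   center. For each destination pixel: rectangular -> polar, shift theta
   by the rotation angle, polar -> rectangular, then sample the source. */
void rotate_image(const unsigned char *src, unsigned char *dst,
                  int width, int height, double angle)
{
    double cx = width / 2.0, cy = height / 2.0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double r     = hypot(x - cx, y - cy);     /* {x, y} -> {theta, r} */
            double theta = atan2(y - cy, x - cx);
            double sx = cx + r * cos(theta - angle);  /* back to rectangular */
            double sy = cy + r * sin(theta - angle);
            int ix = (int)(sx + 0.5), iy = (int)(sy + 0.5);
            dst[y * width + x] =
                (ix >= 0 && ix < width && iy >= 0 && iy < height)
                    ? src[iy * width + ix]
                    : 0;                              /* empty pixels go black */
        }
    }
}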
At this point, one could say that a Perceptual Transcend depends on what
level you're looking at the operation. If you view yourself as carrying
out the operation pixel by pixel, it is an unimaginably tedious cognitive
structure, but if you view the whole thing in a single lump, it is a cognitive
primitive - a point made in Hofstadter's Ant Fugue when discussing ants and
colonies. Not very exciting unless it's Hofstadter explaining it, but
there's more to the visual cortex than that.
For one thing, we consciously experience redness. (If you're not sure
what conscious experience
a.k.a. "qualia" means, the short version is that you are not the one
who speaks your thoughts, you are the one who hears your
thoughts.) Qualia are the stuff making up the indescribable difference
between red and green.
The term "semantic primitive" describes more than just the level
at which symbols are discrete, compact objects. It describes the level of
conscious perception. Unlike the computer manipulating numbers formed of
bits, and like the imagined Power manipulating theorems formed of numbers, we
don't lose any resolution in passing from the pixel level to the picture
level. We don't suddenly perceive the idea "there is a bear in front
of me"; we see a picture of a bear, containing millions of pixels, every
one of which is consciously experienced simultaneously. A Perceptual
Transcend isn't "just" the imposition of a new cognitive level; it
turns the cognitive structures into consciously experienced primitives.
"To put it another way, one PT from now, the whole of human knowledge
becomes perceivable in a single flash of experience, in the same way that
we now perceive an entire picture at once."
Of course, the PT won't be used as a post-Singularity unit of
progress. Even if it were initially, it won't be too long before "PT"
itself is Transcended and the Powers jump out of the system yet again.
After all, the Singularity is ultimately as far beyond me, the author, as it is
beyond any other human, and so my PTs will be as worthless a description as the
doubling sequence discarded so long ago. Even if we accept the PT as the
basic unit of measure, it simply introduces a secondary Singularity.
Maybe the Perceptual Transcends will occur every two consciously experienced
years at first, but then will occur every conscious year, and then every
conscious six months - get the picture?
It's like the "Birthday Cantatatata..." in Hofstadter's
book Godel, Escher, Bach.
You can start with the sequence {1, 2, 3, 4 ...} and jump out of it to w
(omega), the symbol for infinity. But then one has {w, w +
1, w + 2 ... }, and we jump out again to 2w. Then 3w,
and 4w, and w^2 and w^3 and w^w and w^(w^w) and higher towers of w until we jump
out to the ordinal epsilon-zero, which includes all exponential towers
of ws.
The PTs may introduce a second Singularity, and a third Singularity, and a
fourth, until Singularities are coming faster and faster and the first w-Singularity
is imminent -
Or the Powers may simply jump beyond that system. The Birthday
Cantatatata... was written by a human - admittedly Douglas Hofstadter,
but still a human - and the concepts involved in it may be Transcended by the
very first transhuman.
The Powers are beyond our ability to comprehend.
Get the picture?
It's hard to appreciate the Singularity properly without first appreciating
really large numbers. I'm not talking about little tiny numbers, barely
distinguishable from zero, like the number of atoms in the Universe or the
number of years it would take a monkey to duplicate the works of
Shakespeare. I invite you to consider what was, circa 1977, the largest
number ever to be used in a serious mathematical proof. The proof, by Ronald L. Graham, is an upper
bound to a certain question of Ramsey theory. In order to explain the
proof, one must introduce a new notation, due to Donald E. Knuth in the
article Coping With Finiteness. The notation is usually a small
arrow, pointing upwards, here abbreviated as ^. Written as a function:
/* arrow(num, power, arrownum) computes num ^...^ power with arrownum
   Knuth up-arrows; zero arrows is plain multiplication. Conceptual only:
   for the numbers discussed below, the result overflows any fixed-size
   integer almost immediately. */
int arrow(int num, int power, int arrownum) {
    if (arrownum == 0)           /* base case: no arrows = multiplication */
        return num * power;
    int answer = num;            /* one copy of num to start the tower */
    for (int i = 1; i < power; i++)
        answer = arrow(num, answer, arrownum - 1);   /* fold in the rest */
    return answer;
} // end arrow
2^4 = 2 * 2 * 2 * 2 = 16.
3^^4 = 3^(3^(3^3)) = 3^(3^27) = 3^7,625,597,484,987.
7^^^^3 = 7^^^(7^^^7).
3^3 = 3 * 3 * 3 = 27. This number is small enough to visualize.
3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. Larger than 27, but so
small I can actually type it. Nobody can visualize seven trillion of
anything, but we can easily understand it as being on roughly the same order
as, say, the gross national product.
3^^^3 = 3^^(3^^3) = 3^(3^(3^(3^...^(3^3)...))). The "..." is
7,625,597,484,987 threes long. In other words, 3^^^3 or arrow(3, 3, 3) is an exponential tower of threes 7,625,597,484,987
levels high. The number is now beyond the human ability to understand,
but the procedure for producing it can be visualized. You take x=1.
You let x equal 3^x. Repeat seven trillion times.
While the very first stages of the number are far too large to be contained in
the entire Universe, the exponential tower, written as
"3^3^3^3...^3", is still so small that it could be stored on a modern
supercomputer.
3^^^^3 = 3^^^(3^^^3) = 3^^(3^^(3^^...^^(3^^3)...)). Both the number
and the procedure for producing it are now beyond human visualization, although
the procedure can be understood. Take a number x=1. Let x
equal an exponential tower of threes of height x. Repeat 3^^^3
times, where 3^^^3 equals an exponential tower seven trillion threes high.
And yet, in the words of Martin Gardner: "3^^^^3 is unimaginably
larger than 3^^^3, but it is still small as finite numbers go, since most
finite numbers are very much larger."
And now, Graham's number. Let x equal 3^^^^3, or the
unimaginable number just described above. Let x equal 3^^^^^^^(x
arrows)^^^^^^^3. Repeat 63 times, or 64 including the starting 3^^^^3.
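Using the arrow() function defined earlier on this page, the construction reads roughly like this - purely conceptually, of course, since every one of these values overflows any physical computer on the very first step:

/* Conceptual sketch of the construction above, not something to run. */
int graham(void) {
    int x = arrow(3, 3, 4);        /* start with 3^^^^3 */
    for (int i = 0; i < 63; i++)   /* repeat 63 times; 64 terms counting the start */
        x = arrow(3, 3, x);        /* 3, then x arrows, then 3 */
    return x;                      /* Graham's number, conceptually */
}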
Graham's number is far beyond my ability to grasp. I can describe it,
but I cannot properly appreciate it. (Perhaps Graham can appreciate it,
having written a mathematical proof that uses it.) This number is far
larger than most people's conception of infinity. I know that it
was larger than mine. My sense of awe when I first encountered this
number was beyond words. It was the sense of looking upon something so
much larger than the world inside my head that my conception of the
Universe was shattered and rebuilt to fit. All theologians should face a
number like that, so they can properly appreciate what they invoke by talking
about the "infinite" intelligence of God.
My happiness was completed when I learned that the actual answer to
the Ramsey problem that gave birth to that number - rather than the upper
bound - was probably six.
Why was all of this necessary, mathematical aesthetics aside? Because
until you understand the hollowness of the words "infinity",
"large" and "transhuman", you cannot appreciate the
Singularity. Even appreciating the Singularity is as far beyond us as
visualizing Graham's number is to a chimpanzee. Farther beyond us than
that. No human analogies will ever be able to describe the Singularity,
because we are only human.
The number above was forged of the human mind. It is nothing but a
finite positive integer, though a large one. It is composite and odd,
rather than prime or even; it is perfectly divisible by three. Encoded in
the decimal digits of that number, by almost any encoding scheme one cares to
name, are all the works ever written by the human hand, and all the works that
could have been written, at a hundred thousand words per minute, over the age
of the Universe raised to its own power a thousand times. And yet, if we
add up all the base-ten digits the result will be divisible by nine. The
number is still a finite positive integer. It may contain Universes
unimaginably larger than this one, but it is still only a number. It is a
number so small that the algorithm to produce it can be held in a single human
mind.
The Singularity is beyond that. We cannot pigeonhole it by stating
that it will be a finite positive integer. We cannot say anything at all
about it, except that it will be beyond our understanding.
If you thought that Knuth's arrow notation produced some fairly large
numbers, what about T(n)? How many states does a Turing machine
need to implement the calculation above? What is the complexity of Graham's number, C(Graham)?
Probably on the order of 100. And moreover, T(C(Graham)) is likely to be
much, much larger than Graham's number. Why go through x = 3^^^(x arrows)^^^3
only 64 times? Why not 3^^^^3 times? That'd probably be
easier, since we already need to generate 3^^^^3, but not 64. And with
the extra space, we might even be able to introduce an even more
computationally complex algorithm. In fact, Knuth's arrow notation may
not be the most powerful algorithm that fits into C(Knuth) states.
T(n) is the metaphor for the growth rate of a self-enhancing entity
because it conveys the concept of having additional intelligence with which to
enhance oneself. I don't know when T(n) passes beyond the
threshold of what human mathematicians can, in theory, calculate.
Probably more than n=10 and less than n=100. The point is
that after a few iterations, we wind up with T(4294967296). Now, I don't
know what T(4294967296) will be equal to, but the winning Turing machine will
probably generate a Power whose purpose is to think of a really large number. That's
what the term "large" means.
It's all very well to talk about cognitive primitives and obviousness, but
again - what does smarter mean? The meaning of smart can't
be grounded in the Singularity - I haven't been there yet. So
what's my practical definition?
"The toughest challenge for a writer is a character
brighter than the author. It's not impossible. Puzzles the writer
needs months to solve, or to design, the character may solve in moments.
But God help the writer if his abnormally bright character is wrong!"
-- Larry Niven
"Of course, I never wrote
the 'important' story, the sequel about the first amplified human. Once I
tried something similar. John Campbell's letter of rejection began:
'Sorry - you can't write this story. Neither can anyone else.'"
-- Vernor Vinge
Smartness is that quality which makes it impossible to write
a story about a character smarter than you are. You can write about
super-fast thinkers, eidetic memories, lightning calculators; a character who
learns a dozen languages in a week, who can read a textbook in an hour, or who
can invent all kinds of wonderful stuff - as long as you don't have to produce
the actual invention. But you can't write a character with a higher level
of emotional maturity, a character who can spot the obvious solution you
missed, a character who knows (and can tell the reader) the Meaning Of Life, a
character with superhuman self-awareness. Not unless you can do these
things yourself.
Let's take a concrete example, the story Flowers for Algernon (later
the movie Charly), by Daniel Keyes. (I'm afraid I'll have to tell
you how the story comes out, but it's a Character story, not an Idea story, so
that shouldn't spoil it.) Flowers for Algernon is about a
neurosurgical procedure for intelligence enhancement. This procedure was
first tested on a mouse, Algernon, and later on a retarded human, Charlie
Gordon. The enhanced Charlie has the standard science-fictional set of
superhuman characteristics; he thinks fast, learns a lifetime of knowledge in a
few weeks, and discusses arcane mathematics (not shown). Then the mouse,
Algernon, gets sick and dies. Charlie analyzes the enhancement procedure
(not shown) and concludes that the process is basically flawed. Later,
Charlie dies.
That's a science-fictional enhanced human. A real enhanced human would
not have been taken by surprise. A real enhanced human would realize that
any simple intelligence enhancement will be a net evolutionary disadvantage -
if enhancing intelligence were a matter of a simple surgical procedure, it
would have long ago occurred as a natural mutation. This goes double for
a procedure that works on rats! (As far as I know, this never occurred to
Keyes. I selected Flowers, out of all the famous stories of intelligence
enhancement, because, for reasons of dramatic unity, this story shows what
happens to be the correct outcome.)
Note that I didn't dazzle you with an abstruse technobabble explanation for
Charlie's death; my explanation is two sentences long and can be understood by
someone who isn't an expert in the field. It's the simplicity of
smartness that's so impossible to convey in fiction, and so shocking when we
encounter it in person. All that science fiction can do to show
intelligence is jargon and gadgetry. A truly ultrasmart Charlie Gordon
wouldn't have been taken by surprise; he would have deduced his probable fate
using the above, very simple, line of reasoning. He would have accepted
that probability, rearranged his priorities, and acted accordingly until his
time ran out - or, more probably, figured out an equally simple and
obvious-in-retrospect way to avoid his fate. If Charlie Gordon had really
been ultrasmart, there would have been no story.
There are some gaps so vast that they make all problems new. Imagine
whatever field you happen to be an expert in - neuroscience, programming,
plumbing, whatever - and consider the gap between a novice, just approaching a
problem for the first time, and an expert. Even if a thousand novices try
to solve a problem and fail, there's no way to say that a single expert
couldn't solve the problem casually, offhandedly. If a hundred
well-educated physicists try to solve a problem and fail, an Einstein might
still be able to succeed. If a thousand twelve-year-olds try for a year
to solve a problem, it says nothing about whether or not an adult is likely to
be able to solve the problem. If a million hunter-gatherers try to solve
a problem for a century, the answer might still be obvious to any educated twenty-first-century
human. And no number of chimpanzees, however long they try, could ever
say anything about whether the least human moron could solve the problem
without even thinking. There are some gaps so vast that they make all
problems new; and some of them, such as the gap between novice and expert, or
the gap between hunter-gatherer and educated citizen, are not even hardware
gaps - they deal not with the magic of intelligence, but the magic of knowledge,
or of lack of stupidity.
I think back to before I started studying evolutionary psychology and
cognitive science. I know that I could not then have come close to
predicting the course of the Singularity. "If I couldn't have gotten
it right then, what makes me think I can get it right now?"
I am a human, and an educated citizen, and an adult, and an expert, and a
genius... but if there is even one more gap of similar magnitude remaining
between myself and the Singularity, then my speculations will be no better than
those of an eighteenth-century scientist.
We're all familiar with individual variations in human intelligence,
distributed along the great Gaussian curve; this is the only referent most of
us have for "smarter". But precisely because these
variations fall within the design range of the human brain, they're nothing out
of the ordinary. One of the very deep truths about the human mind is that
evolution designed us to be stupid - to be blinded by ideology, to refuse to
admit we're wrong, to think "the enemy" is inhuman, to be affected by
peer pressure. Variations in intelligence that fall within the normal
design range don't directly affect this stupidity. That's where we get
the folk wisdom that intelligence doesn't imply wisdom, and within the human
range this is mostly correct (8).
The variations we see don't hit hard enough to make people appreciate
what "smarter" means.
I am a Singularitarian because I have some small appreciation of how
utterly, finally, absolutely impossible it is to think like someone even
a little tiny bit smarter than you are. I know that we are all missing
the obvious, every day. There are no hard problems, only problems
that are hard to a certain level of intelligence. Move the smallest bit
upwards, and some problems will suddenly move from "impossible" to
"obvious". Move a substantial degree upwards, and all of them
will become obvious. Move a huge distance upwards...
And I know that my picture of the Singularity will still fall short
of the truth. I may not be modest, but I have my humility - if I can spot
anthropomorphisms and gaping logical flaws in every alleged transhuman in every
piece of science fiction, it follows that a slightly higher-order genius (never
mind a real transhuman!) could read this page and laugh at my lack of
imagination. Call it experience, call it humility, call it
self-awareness, call it the Principle of Mediocrity; I've crossed enough gaps
to believe there are more. I know, in a dim way, just how dumb I am.
I've tried to show the Beyondness of the Singularity by brute force, but it
doesn't take infinite speeds and PTs and ws to place something
utterly beyond us. All it takes is a little tiny bit of edge, a bit
smarter, and the Beyond stares us in the face once more. I've never
been through the Singularity. I've never been to the
Transcend. I just staked out an area of the Low Beyond. This page
is devoted to communicating a sense of awe that comes from personal
experience, and is, therefore, merely human.
From my cortex, to yours; every concept here was born of a plain old Homo
sapiens - and any impression it has made on you was likewise born of a
plain old Homo sapiens. Someone who has devoted a bit more
thought, or someone a bit more extreme; it makes no difference. Whatever
impression you got from this page has not been an accurate picture of the far
future; it has, unavoidably, been an impression of me. And I am not
the far future. Only a version of "Staring into the
Singularity" written by an actual Power could convey experience of the actual
Singularity.
Take whatever future shock this page evoked, and associate it not with the
Singularity; associate it with me, the mild, quiet-spoken fellow
infinitesimally different from the rest of humanity. Don't bother trying
to extrapolate beyond that. You can't. Nobody can - not you, not
me.
2035. Probably earlier.
Since the Internet exploded across the planet, there has been enough
networked computing power for intelligence. If the Internet were properly
reprogrammed, it would be enough to run a human brain, or a seed AI. On the
nanotechnology side, we possess machines capable of producing arbitrary DNA
sequences, and we know how to turn arbitrary DNA sequences into arbitrary
proteins (9). We have machines
- Atomic Force Probes - that can put single atoms anywhere we like, and which
have recently [1999] been demonstrated to be capable of forming atomic
bonds. Hundredth-nanometer precision positioning, atomic-scale
tweezers... the news just keeps on piling up.
If we had a time machine, 100K of information from the future could specify
a protein that built a device that would give us nanotechnology overnight. 100K could
contain the code for a seed AI. Ever since the late 90's, the Singularity
has been only a problem of software. And software is information,
the magic stuff that changes at arbitrarily high speeds. As far as
technology is concerned, the Singularity could happen tomorrow.
One breakthrough - just one major insight - in the science of protein
engineering or atomic manipulation or Artificial Intelligence, one really good day
at Webmind or Zyvex, and the door to Singularity sweeps open.
Drexler has written a
detailed, technical,
how-to book for nanotechnology. After stalling for thirty years, AI
is making a comeback. Computers are growing in power even faster
than their usual, pedestrian rate of doubling in power every two years.
Quate has constructed a 16-head parallel Scanning
Tunnelling Probe. [Written in '96.] I'm starting to work out
methods of coding a transhuman
AI. [Written in '98.] The first chemical bond has been formed
using an atomic-force microscope. The U.S. government has announced its
intent to spend hundreds of millions of dollars on nanotechnology
research. IBM has announced the Blue Gene project
to achieve petaflops (10) computing
power by 2005, with intent to crack the protein folding problem. The Singularity Institute for Artificial Intelligence,
Inc. has been incorporated as a nonprofit with the express purpose of coding a seed AI. [Written in
'00.]
The exact time of Singularity is customarily predicted by taking a trend and
extrapolating it, much as The Population Bomb predicted that we'd run
out of food in 1977. For example, population growth is hyperbolic.
(Maybe you learned it was exponential in math class, but a hyperbolic curve fits the data much better than an
exponential one.) If that trend continues, world population reaches infinity
on Aug 17, 2027, plus or minus 1.8 years. It is, of course, impossible
for the human population to reach infinity. Some say that if we can
create AIs, then the graph might measure sentient population instead of human
population. These people are torturing the metaphor. Nobody
designed the population curve to take into account developments in AI.
It's just a curve, a bunch of numbers. It can't distort the
future course of technology just to remain on track.
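(For reference, a hyperbolic fit has the general form N(t) = C / (t0 - t); it genuinely shoots to infinity as t approaches the critical date t0, whereas an exponential C * 2^(t/d) grows without bound but never diverges at any finite date. The particular constants are whatever the curve-fitters chose; I'm only illustrating the shape of the curve.)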
If you project on a graph the minimum size of the materials we can
manipulate, it reaches the atomic level - nanotechnology
- in I forget how many years (the page vanished), but I think around 2035.
This, of course, was before the time of the Scanning
Tunnelling Microscope and "IBM" spelled out in xenon atoms.
For that matter, we now have the artificial atom
("You can make any kind of artificial atom - long, thin atoms and big,
round atoms."), which has in a sense obsoleted merely molecular
nanotechnology. As of '95, Drexler was giving the ballpark figure of 2015
(11). I suspect the timetable
has been accelerated a bit since then. My own guess would be no later
than 2010.
Similarly, computing power doubles every eighteen months (originally: every two
years). If we extrapolate fifteen years ahead (originally forty, then thirty)
we find computers with as much raw power (10^17 ops/sec) as some
people think humans have, arriving in 2015 (originally 2035, then 2025). [The
previous sentence was written in 1996, revised later that year, and then
revised again in 2000; hence the peculiar numbers.] Does this mean we
have the software to spin
minds? No. Does this mean we can program smarter people?
No. Does this take into account any breakthroughs between now and
then? No. Does this take into account the laws of physics?
No. Is this a detailed model of all the researchers around the
planet? No.
It's just a graph. The "amazing constancy" of Moore's Law
entitles it to consideration as a thought-provoking metaphor of the future, but
nothing more. The Transcended doubling
sequence doesn't account for how the faster computer-based researchers can
get the physical manufacturing technology
for the next generation set up in picoseconds, or how they can beat the laws of
physics. That's not to say that such things are impossible - it
doesn't actually strike me as all that likely that modern-day physics has
really reached the ultimate bottom level. Maybe there are no
physical limits. The point is that Moore's Law doesn't explain how
physics can be bypassed.
Mathematics can't predict when the Singularity is coming. (Well, it
can, but it won't get it right.) Even the remarkably steady numbers, such
as the one describing the doubling rate of computing power, (A) describe
unaided human minds and (B) are speeding up, perhaps due to computer-aided
design programs. Statistics may be used to predict the future, but they
don't model it. What I'm trying to say here is that
"2035" is just a wild guess, and it might as well be next Tuesday.
In truth, I don't think in those terms. I do not "project"
when the Singularity will occur. I have a "target date".
I would like the Singularity to occur in 2005, which I think I would have a
reasonable chance of doing via AI if someone handed me a hundred million
dollars a year. The Singularity Institute
would like to finish up in 2008 or so.
Above all, I would really, really like the Singularity to arrive before
nanotechnology, given the virtual certainty of deliberate misuse - misuse of a
purely material (and thus, amoral) ultratechnology, one powerful enough to
destroy the planet. We cannot just sit back and wait. To
quote Michael Butler, "Waiting for the bus is a bad idea if you turn out
to be the bus driver."
The most we can say about 2035 is that it seems like a reasonable upper
bound, given the current rate of progress. The lower bound?
Thirty seconds. We may not know about all the research out there, after
all.
Maybe you don't want to see humanity replaced by a bunch of "machines"
or "mutants", even superintelligent ones? You love humanity and
you don't want to see it obsoleted? You're afraid of disturbing the
natural course of existence?
Well, tough luck. The Singularity is the natural course of
existence. Every species - at least, every species that doesn't blow
itself up - sooner or later comes face-to-face with a full-blown
superintelligence (12). It
happens to everyone. It will happen to us. It will even happen to
the first-stage transhumans or the initial human-equivalent AIs.
But just because humans become obsolete doesn't mean you become
obsolete. You are not a human. You are an intelligence which, at
present, happens to have a mind unfortunately limited to human hardware.
(13). That could
change. With any luck, all persons on this planet who live to 2035 or
2005 or whenever - and maybe some who don't
- will wind up as Powers.
Transferring a human mind into a computer system is known as
"uploading"; turning a mortal into a Power is known as
"upgrading". The archetypal upload is the Moravec Transfer,
proposed by Dr. Hans
Moravec in the book Mind Children.
(14).
NOTE:
The key assumption of the Moravec Transfer is that we can
perfectly simulate a single neuron, which Penrose and Hameroff
would argue is untrue. (As of 1999, a lobster neuron has been successfully
replaced with $7.50 worth of parts bought at Radio Shack; this is minor
suggestive evidence, but it doesn't even come close to settling the
issue.) The following discussion assumes that either (A) the laws of
physics are computational or (B) we can build a "superneuron", a
trans-Turing computer that does the same thing a neuron does. (Penrose and Hameroff
have no objection to the latter proposition. If a neuron can take
advantage of deep physics to perform noncomputable operations, we can do the
same thing technologically.) The scenario given also assumes sophisticated nanomedicine; i.e., nanomachines
capable of carrying out complex instructions in a biological environment.
The Moravec Transfer gradually moves (rather than copies) a human
mind into a computer. You need never lose consciousness. (The
details which follow have been redesigned and fleshed out a bit (by yours
truly) from the original in Mind Children.)
In the first phase, a single neuron is replaced by a robotic analogue whose
behavior is computed in an external computer; the robotic body reports incoming
signals to the simulation and delivers the simulation's outputs to the neuron's
neighbors. This entire procedure has had no effect on the flow of
information in the brain, except that one neuron's worth of processing is now
being done inside a computer instead of a neuron. The replacement is then
repeated, neuron by neuron, until the entire brain consists of robotic neurons
backed by the simulation.
Despite this, the synapses (links) between robotic neurons
are still physical; robots report the reception of neurotransmitters at
artificial dendrites and release neurotransmitters at the end of artificial
axons. In the next phase, we replace the physical synapses with software
links.
At the end of this phase, the robots are all firing their
axons, but none of them are receiving anything, none of them are affecting each
other, and none of them are affecting the computer simulation.
The robotic neurons, now causally irrelevant, can be switched off and
discarded; the simulation alone carries the pattern. You have now been placed
entirely inside a computer, bit by bit, without losing consciousness. In
Moravec's words, your metamorphosis is complete.
If any of the phases seem too abrupt, the transfer of an individual neuron,
or synapse, can be spread out over as long a time as necessary. To slowly
transfer a synapse into a computer, we can use weighted factors of the physical
synapse and the computational synapse to produce the output. The
weighting would start as entirely physical and end as entirely
computational. Since we are presuming the neuron is being perfectly
simulated, the weighting affects only the flow of causality and not the actual process
of events.
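A minimal sketch of that weighted hand-off, assuming the blend really can be reduced to a single scalar weight per synapse (an illustrative simplification of mine, not part of Moravec's proposal):

/* Blend the physically measured synapse signal with the simulated one.
   weight ramps slowly from 0.0 (all physical) to 1.0 (all computational);
   if the simulation is perfect, the blended output never changes. */
double synapse_output(double physical_signal, double simulated_signal,
                      double weight)
{
    return (1.0 - weight) * physical_signal + weight * simulated_signal;
}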
Slowly transferring a neuron is a bit more difficult.
Assuming we can simulate an individual neuron, and that we
can replace neurons with robotic analogues, I think that thoroughly
demonstrates the possibility of uploading, given that consciousness is strictly
a function of neurons. (And if we have immortal souls, then uploading is
a real snap. Detach soul from brain. Copy any information
not stored in soul. Attach soul to new substrate. Upload complete.)
At this point it is customary to speculate about how one goes about eating,
drinking, walking around. People state that they are unwilling to give up
physical reality, worry about whether or not they will have sufficient
computational power to simulate a hedonistic world of their wildest desires,
and so on and so on ad nauseam. Even Vinge himself, discoverer of
the Singularity, has gone on record as wondering whether one's true self would
be diluted by Transcendence.
I hope that by this point in the page you have been sufficiently impressed
by the power and scope and incomprehensibility and general Transcendence of the
Singularity that these speculations sound silly. If you wish to
remain undiluted, you will be able to arrange it. You will be able to
make backups. You will be able to preserve your personality regardless of
substrate. The only folks who have to worry about being unwillingly
diluted are the first humans to Transcend, and even they may have nothing to
worry about if there's a Friendly, AI-born superintelligence to act as
transition guide.
Of course, it may be that any being of sufficient intelligence wants
to be diluted. Exercising anxiety over that possibility seems
spectacularly pointless, analogous to children worrying that, as adults, they
will no longer want to be thoughtlessly cruel to other children. If you want
to be diluted, it's not a wrongness that we should worry about.
Maybe, after Transcending, you'll be different. If that is so, then
that change is inevitable and there is nothing you can do about it. The
human brain has a finite number of neurons, and therefore a finite number of
possible states. Eventually, you will die, go into an eternal loop, or
Transcend. In the long run... the really long run... mortality
isn't an option.
Likewise, there's absolutely no point in worrying that hostile Powers will
inevitably wipe out humanity. If it turns out that all goals are
ultimately arbitrary, then it is conceivable that a badly programmed Power
could wind up with goals making it hostile to humanity; this is an engineering
risk, and minimizing it is an engineering
task. But emotions like "selfishness" and
"resentment" do
not spontaneously appear in artificial intelligences, hackneyed
science-fictional plot devices to the contrary. Resentment is a complex
functional adaptation which evolved in humans over the course of millions of
years; it does not simply appear out of nowhere. Even the tendency to
evaluate your own group as more valuable is an evolved one, along with the tendency
to think in terms of "us" and "them" in the first place.
Why would a generic rational superintelligence categorize humanity as
meaningless? The only circumstances under which this would be an inevitable
conclusion is if human life is meaningless, if the lack of meaning is an
observer-independent fact. And even that wouldn't be enough to
spell out humanity's doom; the action of exterminating humanity would also have
to be meaningful, again as an observer-independent fact. Which would mean
that any sufficiently intelligent human would commit suicide. And if
that's so, one rather suspects that there's nothing we can do about it.
Ultimately, nobody knows what lies on the other side of Singularity, not
even me. And yes, it takes courage to step through that door. If
infants could choose whether or not to leave the womb, without knowing what lay
at the end of the birth canal - without knowing if anything lay at the
end of the birth canal - how many would? But beyond the birth canal is where
reality is. It's where things happen. Staying in the womb forever,
even if we could, would be pointless and sterile.
"You know, I don't
understand why humans evolved as such thoughtless, shortsighted
creatures."
"Well, it can't stay that way forever."
"You think we'll get smarter?"
"That's one of the two possibilities."
Since this document was originally written in 1996,
"nanotechnology" has gone public. I expect that everyone has
now heard of the concept of attaining complete control over the molecular
structure of matter. This would make it possible to create food from
sewage, to heal broken spinal cords, to reverse old age, to make everyone healthy
and wealthy, and to deliberately wipe out all life on the planet.
Actually, the raw, destructive military uses would probably be a lot easier
than the complex, creative uses. Anyone who's ever read a history book
gets one guess as to what happens next.
"Active shields" might suffice against accidental outbreaks of
"grey goo", but not against hardened military-grade nano, perfectly
capable of using fusion weapons to break through active shields. And yet,
despite this threat, we can't even try to suppress nanotechnology; that simply
increases the probability that the villains will get it first. (15).
Mitchell Porter calls it "The race between superweapons and
superintelligence." Human civilization will continue to change until
we either create superintelligence, or wipe ourselves out. Those are the
two stable states, the two "attractors" in the system. It
doesn't matter how long it takes, or how many cycles of nanowar-and-regrowth occur
before Transcendence or final extinction. If the system keeps changing,
over a thousand years, or a million years, or a billion years, it will
eventually wind up in one attractor or the other. But my best guess is
that the issue will be settled now.
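As a toy illustration of the two-attractors claim - purely my own sketch, with arbitrary made-up probabilities, not anything from the original argument - here is a minimal simulation in which a changing civilization has some small chance, each time-step, of falling into either absorbing state. Run long enough, essentially every trajectory ends up absorbed; only the timing varies.

import random

def civilization_fate(p_super=1e-4, p_extinct=1e-4, max_steps=1_000_000):
    # Each step is one unit of time - a year, a century; the argument doesn't care.
    # As long as the system keeps changing, every step carries some small chance
    # of reaching either absorbing state ("attractor").
    for step in range(max_steps):
        r = random.random()
        if r < p_super:
            return "superintelligence", step
        if r < p_super + p_extinct:
            return "extinction", step
    return "still drifting", max_steps

tally = {}
for _ in range(1000):
    outcome, _ = civilization_fate()
    tally[outcome] = tally.get(outcome, 0) + 1
print(tally)  # with these odds, virtually every run ends in one attractor or the other

The point is not the numbers, which are placeholders, but the shape of the argument: any persistent, nonzero per-step chance of absorption makes eventual absorption a near-certainty.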
Nor is the possibility of destruction the only reason for racing to
Singularity. There is also the ongoing sum of human misery, which is not
only a practical problem, not only an ethical problem, but a purely moral
problem in its own right. There are truly horrible things going on in the
world today. If I had the choice of erasing crack neighborhoods or
erasing the Holocaust, I don't know which I'd pick. I do know which
project has a better chance of success.
Have you ever pondered the Great Questions of Life, the Universe, and
Everything? Have you ever wondered whether it really matters, cosmically
speaking, if you stay in bed this morning? Have you ever stared into the
hard problem of ethics, or consciousness, or reality, and realized that there is
no humanly-understandable justification for subjective experience, getting out
of bed, or anything existing at all? How can we do anything, set any
goals, without knowing the Meaning of Life? How can we justify our
continued participation in the rat race if we don't know why we're
running? What's it all for?
We don't know. We have to guess, and act on our best guesses.
Regardless of the absolute probabilities, a superintelligence has a better chance than we do of discovering the true moral right, of having the power to implement it, and of wanting to implement it. The state where superintelligence exists is,
with a very high degree of probability regardless of the True Meaning of Life,
preferable to the current state. That's the Interim Meaning of Life, and
it works well enough... but it's a long, long way from certainty, or really
knowing what's going on!
I have had it. I have had it with crack houses, dictatorships,
torture chambers, disease, old age, spinal paralysis, and world hunger. I
have had it with a planetary death rate of 150,000 sentient beings per day.
I have had it with this planet. I have had it with mortality. None
of this is necessary. The time has come to stop turning away from the
mugging on the corner, the beggar on the street. It is no longer
necessary to look nervously away, repeating the mantra: "I can't
solve all the problems of the world." We can. We can end
this.
And so I have lost, not my faith, but my suspension of disbelief.
Strange as the Singularity may seem, there are times when it seems much more
reasonable, far less arbitrary, than life as a human. There is a
better way! Why rationalize this life? Why try to pretend that it
makes sense? Why make it seem bright and happy? There is an alternative!
I'm not saying that there isn't fun in this life. There is. But any
amount of sorrow is unacceptable. The time has come to stop hypnotizing
ourselves into believing that pain and unhappiness are desirable! Maybe
perfection isn't attainable, even on the other side of Singularity, but
that doesn't mean that the faults and flaws are okay. The time has
come to stop pretending it doesn't hurt!
Our fellow humans are screaming in pain, our planet will probably be
scorched to a cinder or converted into goo, we don't know what the hell is
going on, and the Singularity will solve these problems. I declare reaching
the Singularity as fast as possible to be the Interim Meaning of Life, the
temporary definition of Good, and the foundation until further notice of my
ethical system.
To quote an earlier version of Staring into the Singularity:
"Probably a lot of researchers on paths to the
Singularity are spending valuable time writing grant proposals, or doing things
that could be done by lab assistants. It would be a fine thing if there
were a Singularity Support Foundation to ensure that these people weren't
distracted. There is probably one researcher alive today - Hofstadter, Drexler, Lenat, Moravec, Goertzel, Chalmers, Quate,
someone just graduating college, or even me
- who is the person who gets to the Singularity. Every hour
that person is delayed is another hour to the Singularity. Every hour,
six thousand people die. Perhaps we should be doing something about this
person's spending a fourth of vis time and energy writing grant
proposals." (16).
This summarizes the basic principle behind accelerating the
Singularity; there is one project, somewhere, that will create
greater-than-human intelligence. That project, probably in the field of
AI, will be backed by advances in three or four other fields, such as cognitive
science, high-speed "ultracomputing" hardware, whatever previous work
there's been in AI, and maybe insight-sources such as BCI (Brain-Computer Interfaces).
The researchers on these projects eat food and wear clothes and watch
television shows that have been produced by the worldwide economy. Any
productive activity, anywhere along the chain, can count as supporting the
Singularity.
To support the Singularity indirectly, you can keep plugging away at your
daily job, at least assuming you're a farmer rather than a class-action
lawyer. Neurologists can study cognitive science, computer programmers can study AI, and both can try to be ready when the Singularity needs them. And the various transhumanist organizations, such as the Extropy Institute and Foresight, can be targeted for more immediate forms of aid.
Which is all good, but some of us would like the opportunity to accelerate
the Singularity - to directly help create a greater-than-human intelligence.
Four years after the publication of Staring 1.0, there is now
officially a Singularity Institute! Coding
has not yet begun on the AI project, but work continues on Coding a Transhuman AI 2, and
when the design document is complete, we will begin coding and we will
write a seed AI. We have - just recently, as of this latest revised
version - received tax-exempt status, and are now accepting donations!
And the clock continues to tick, and another bit of life as we know it burns
away...
I think it's safe to say that I can now visualize a complete path leading up
to the Singularity, I have some idea of what it would take to get there and how
much it will cost, and I think we could probably do it by 2010.
Substantially earlier, given a lot of funding and research problems that turn
out to be tractable.
So the heck with Moore's Law. The Singularity will happen when we go
out and make it happen.
I'd also like to say a few things about how not to get to the
Singularity.
As an earlier version of Staring said, "This page isn't a call
to arms in the ordinary sense." I've always deeply mistrusted the
human tendency to form social organizations. Organizations tend to
perpetuate themselves, rather than solving problems. The Singularity meme
is awesomely powerful. It must not be allowed to fall into the usual, the
easy patterns. The Singularity will not be advanced by a cult, a
mutual admiration society, or a bunch of crackpots. Drexler faced much the same
problem with nanotechnology.
The Principle
of Independence, in the Singularitarian
Principles, is one safeguard. To summarize, the Principle of
Independence renounces the idea that one Singularitarian could possess any form
of authority over another. For my own reasons, I have adopted the
Singularity as a personal goal. If I can be more efficient by working with
other Singularitarians, great. But what I care about is the Singularity,
not Singularitarianism.
Another safeguard is the Principle of
Intelligence, which states that whether an idea is intelligent or stupid
takes logical precedence over whether it's pro- or anti-Singularity.
(You'd think that this would all be blatantly obvious - unless, of course,
you'd ever read a history book, or talked to other humans, or turned on a
television set or something.)
There's another safeguard that isn't in the Principles. It's the idea
I originally wrote Staring into the Singularity to emphasize. It's
this one last piece of advice: Don't go Utopian.
Don't describe Life after Singularity in glowing terms. Don't describe
it at all. I think the all-time low point in predicting the future came
in the few brief paragraphs of Unbounding the Future that I read,
when they described a pedestrian being run over and his hand miraculously
healing. That's ridiculous. Pedestrian? Run over?
Hand? Cars in a nanotech world? Why not just have
a bunch of apes describe the ease of getting bananas with a human mind?
In the words of Drexler:
"I would emphasize that I have been invited to give
talks at places like the physical sciences colloquium series at IBM's main
research center, at Xerox PARC, and so forth, so these ideas are being taken
seriously by serious technical people, but it is a mixed reaction. You want
that reaction to be as positive as possible, so I plead with everyone to please
keep the level of cultishness and bullshit down (17), and even to be rather restrained
in talking about wild consequences, which are in fact true and technically
defensible, because they don't sound that way. People need to have their
thinking grow into longer-term consequences gradually; you don't begin
there."
The problem with people expounding their Utopian visions of
a nanotech world is that their consequences aren't wild enough.
Looking at stories of instantly healing wounds, or any material object being
instantly available, doesn't give you the sense of looking into the future.
It gives you the sense that you're looking into an unimaginative person's
childhood fantasy of omnipotence, and that predisposes you to treat
nanotechnology the same way. Worse, it attracts other people with
unimaginative fantasies of omnipotence. There's no better way to turn
into a bunch of parlor pinks, sipping coffee and planning the Revolution
without actually doing anything.
I suppose I shouldn't be too harsh on the nano-Utopia types. Some of
them may be actual researchers or science-fiction writers or other people doing
useful things; some of them may be rank-and-file sincerely trying to make it
happen who just got caught in the general lack of imagination; and of course,
none of them have been to the Low
Beyond. Once you've read this page, though, there's no excuse.
This page is about staring into the Singularity. It is about awe, the
Beyond, the end of history, and things beyond human comprehension. It is
intended to invoke a sense of future, and I hope that my readers will be
inclined to view nanotechnology, artificial intelligence, neurology, and all
the other paths to the Singularity in the same way - as part of the future.
I hope that attracts the right sort of people.
In a moment of insanity, I subscribed to the Extropian mailing
list. These people know what "Singularity" means. In
theory, they know what's coming. And yet, even as I write [in '96 -
they've improved a bit in '99], folk who really ought to know better are
arguing over whether transhumans will have enough computing power to simulate
private Universes, whether the amount of computing power available to
transhumans is limited by the laws of physics, whether someone uploaded into a
trans-computer is really the same person or just an amazing soybean imitation,
and - least believably of all - whether our unimaginably intelligent future
selves will still be having sex.
Why is this our concern? Why do we need to know this? Can
it not be that maybe, just maybe, these problems can wait until after
we're five times as smart and some of our blind spots have been filled?
Right now, every human being on this planet has one concern: How do we get
to the Singularity as fast as possible? What happens afterward is not our problem, and I deplore those gosh-wow, so-cloying-they-make-you-throw-up, and just plain boring and unimaginative pictures of a future with unlimited resources and completely unaltered mortals. Leave the problems of transhumanity to the
transhumans. Our chances of getting anything right are the same as a
fish designing a working airplane out of algae and pebbles.
Our sole responsibility is to produce something smarter than we are; any
problems beyond that are not ours to solve.
How do we keep the world economy running for at least another ten
years? Who's willing to fund an AI project? Who do we need to
recruit for an AI project? How can we avoid the standard technophobic
backlash? And what do we do if nanotech comes first?
These are the practical questions that will be faced in the immediate
future. The correct questions, and the answers, are the proper
concern of mailing lists.
[And, as of '00, the Singularity Institute.]
I don't object to letting the imagination run free. That's how all this
got started, after all. But don't get so emotionally involved in
it, don't try to claim that your visualization of the Other Side of Dawn has a
chance of being correct, and spend your time making the
Singularity instead.
(Or, check out the
Singularity category from the Open Directory
Project.)
(Or, visit the Singularity Institute.)